12 research outputs found

    nsroot: Minimalist Process Isolation Tool Implemented With Linux Namespaces

    Get PDF
    Data analyses in the life sciences are moving from tools run on a personal computer to services run on large computing platforms. This creates a need to package tools and dependencies for easy installation, configuration and deployment on distributed platforms. In addition, for secure execution there is a need for process isolation on a shared platform. Existing virtual machine and container technologies are often more complex than traditional Unix utilities, like chroot, and often require root privileges in order to set up or use. This is especially challenging on HPC systems where users typically do not have root access. We therefore present nsroot, a lightweight Linux namespaces based process isolation tool. It allows restricting the runtime environment of data analysis tools that may not have been designed with security as a top priority, in order to reduce the risk and consequences of security breaches, without requiring any special privileges. The codebase of nsroot is small, and it provides a command line interface similar to chroot. It can be used on all Linux kernels that implement user namespaces. In addition, we propose combining nsroot with the AppImage format for secure execution of packaged applications. nsroot is open sourced and available at: https://github.com/uit-no/nsroot
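    A minimal sketch of the kernel mechanism that nsroot builds on, unprivileged user namespaces created with unshare(2), is shown below. It is not nsroot's implementation; it only illustrates how a process can enter a new user and mount namespace without root.

```python
# Minimal sketch of unprivileged namespace isolation on Linux.
# This is NOT nsroot's code; it only illustrates the kernel feature
# (user + mount namespaces via unshare(2)) that nsroot builds on.
import ctypes
import os

CLONE_NEWUSER = 0x10000000  # new user namespace (no root required)
CLONE_NEWNS = 0x00020000    # new mount namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def isolate():
    # Create a new user namespace plus a private mount namespace.
    if libc.unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    # Map the invoking user to uid 0 inside the new namespace so that
    # chroot-like operations become possible without real root.
    with open("/proc/self/uid_map", "w") as f:
        f.write(f"0 {os.getuid()} 1\n")

if __name__ == "__main__":
    isolate()
    # Inside its own namespace the process now appears as uid 0;
    # outside, it still has only the unprivileged user's rights.
    os.execvp("id", ["id"])
```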

    Work Extraction and Landauer's Principle in a Quantum Spin Hall Device

    Get PDF
    Landauer's principle states that erasure of each bit of information in a system requires at least an energy of k_B T ln 2 to be dissipated. In return, the blank bit may be utilized to extract usable work of the amount k_B T ln 2, in keeping with the second law of thermodynamics. While in principle any collection of spins can be utilized as information storage, extracting work from this resource requires specialized engines capable of using it. In this work, we focus on heat and charge transport in a quantum spin Hall device in the presence of a spin bath. We show how a properly initialized nuclear spin subsystem can be used as a memory resource for a Maxwell's Demon to harvest available heat energy from the reservoirs and induce a charge current that can power an external electrical load. We also show how to initialize the nuclear spin subsystem using applied bias currents, which necessarily dissipate energy, hence demonstrating Landauer's principle. This provides an alternative method of "energy storage" in an all-electrical device. We finally propose a realistic setup to experimentally observe a Landauer erasure/work extraction cycle. Comment: accepted for publication in PRB; 9 pages, 4 figures, RevTeX.
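    For a concrete sense of the bound quoted in the abstract, a short calculation (temperature chosen arbitrarily for illustration, not taken from the paper) gives the minimum dissipation per erased bit:

```python
# Numerical illustration of the Landauer bound k_B * T * ln(2); the
# temperature value is an illustrative choice, not from the paper.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # room temperature in kelvin (example)

E_min = k_B * T * math.log(2)
print(f"Landauer bound at {T} K: {E_min:.3e} J per bit")
# -> roughly 2.87e-21 J per bit
```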

    META-pipe cloud setup and execution [version 3; peer review: 2 approved, 1 approved with reservations]

    Get PDF
    META-pipe is a complete service for the analysis of marine metagenomic data. It provides assembly of high-throughput sequence data, functional annotation of predicted genes, and taxonomic profiling. The functional annotation is computationally demanding and is therefore currently run on a high-performance computing cluster in Norway. However, additional compute resources are necessary to open the service to all ELIXIR users. We describe our approach for setting up and executing the functional analysis of META-pipe on additional academic and commercial clouds. Our goal is to provide a powerful analysis service that is easy to use and to maintain. Our design therefore uses a distributed architecture where we combine central servers with multiple distributed backends that execute the computationally intensive jobs. We believe our experiences developing and operating META-pipe provide a useful model for others who plan to provide a portal-based data analysis service in ELIXIR and other organizations with geographically distributed compute and storage resources.
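    The architecture described above, central servers coordinating distributed backends that execute the heavy jobs, is commonly realized as a pull-based worker loop. The sketch below shows that generic pattern only; the endpoint paths and field names are invented for illustration and are not META-pipe's actual API.

```python
# Generic sketch of a pull-based backend: distributed workers fetch
# jobs from a central server and report results back. All endpoints
# and field names are hypothetical, not META-pipe's real interface.
import time
import requests

SERVER = "https://example.org/api"   # hypothetical central job server

def run_job(job):
    # Placeholder for the computationally intensive analysis step.
    return {"job_id": job["id"], "status": "done"}

def worker_loop():
    while True:
        resp = requests.get(f"{SERVER}/jobs/next", timeout=30)
        if resp.status_code == 204:      # no work available right now
            time.sleep(10)
            continue
        job = resp.json()
        result = run_job(job)
        requests.post(f"{SERVER}/jobs/{job['id']}/result",
                      json=result, timeout=30)

if __name__ == "__main__":
    worker_loop()
```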

    Spark-SPELL: Low-latency query-based search for gene expression compendia on cluster computers

    Get PDF
    Exploratory analyses are vital to fully realize the potential for scientific discoveries in large-scale biomedical data compendia. Specifically, most biomedical data analyses require a human expert to interactively explore the data to find novel hypotheses or conclusions. However, recent biotechnology instruments are generating tera-scale datasets, and no interactive biomedical data analysis systems scale to such large datasets. We present the design, implementation and optimization of the SPELL biomedical search algorithm on the Spark framework. We demonstrate the scalability and interactive performance of our Spark-SPELL system. In addition, we demonstrate the performance improvements of our optimizations to the SPELL algorithm and the Spark framework.
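    As a rough illustration of query-based search over an expression compendium, the PySpark sketch below ranks genes by Pearson correlation with the mean profile of a query gene set. This is a simplified stand-in, not the SPELL algorithm itself (which additionally weights individual datasets by their relevance to the query) and not the authors' Spark-SPELL implementation; the toy data is invented.

```python
# Simplified query-based search: rank genes by correlation with the
# mean expression profile of a query gene set. Not the SPELL algorithm
# and not the authors' code; shown only to illustrate the pattern.
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("query-search-sketch").getOrCreate()
sc = spark.sparkContext

# Toy expression compendium: (gene_name, expression_vector)
expr = sc.parallelize([
    ("geneA", np.array([1.0, 2.0, 3.0, 4.0])),
    ("geneB", np.array([2.0, 1.9, 3.1, 4.2])),
    ("geneC", np.array([4.0, 3.0, 2.0, 1.0])),
])

query_genes = {"geneA"}
# The average profile of the query set acts as the search signature.
query_profile = (expr.filter(lambda kv: kv[0] in query_genes)
                     .map(lambda kv: kv[1])
                     .reduce(lambda a, b: a + b)) / len(query_genes)

def correlation(vec):
    return float(np.corrcoef(vec, query_profile)[0, 1])

ranked = (expr.filter(lambda kv: kv[0] not in query_genes)
              .map(lambda kv: (kv[0], correlation(kv[1])))
              .sortBy(lambda kv: -kv[1]))

print(ranked.collect())
spark.stop()
```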

    META-pipe authorization service

    No full text
    Source code for the META-pipe authorization server as described in our F1000 publication. Abstract: We describe the design, implementation, and use of the META-pipe Authorization service. META-pipe is a complete workflow for the analysis of marine metagenomics data. We will provide META-pipe as a web-based data analysis service for ELIXIR users. We have integrated our Authorization service with the ELIXIR Authentication and Authorization Infrastructure (AAI), which allows single sign-on to services across the ELIXIR infrastructure. We use the Authorization service to authorize access to data on the META-pipe storage system and to jobs in the META-pipe job queue. Our Authorization server was among the first services to integrate with the ELIXIR AAI. The code is open source at: https://gitlab.com/uit-sfb/AuthService2
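    The service authorizes access based on identities established through ELIXIR AAI single sign-on. As a generic illustration of that pattern, the sketch below validates a bearer token and checks ownership before granting access; the claim names, signing key, and permission model are assumptions for the example, not the actual META-pipe or ELIXIR AAI interface.

```python
# Generic sketch of validating a bearer token before authorizing access
# to a storage object or job-queue entry. Claim names, the shared key,
# and the ownership rule are illustrative assumptions only.
import jwt  # PyJWT

SHARED_KEY = "replace-with-real-verification-key"  # hypothetical

def authorize(bearer_token: str, resource_owner: str) -> bool:
    try:
        claims = jwt.decode(bearer_token, SHARED_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    # Here only the owner of a dataset or job may access it; a real
    # service would consult a richer permission store.
    return claims.get("sub") == resource_owner

# Example: issue a token locally and check it (illustration only).
token = jwt.encode({"sub": "alice"}, SHARED_KEY, algorithm="HS256")
print(authorize(token, "alice"))  # True
print(authorize(token, "bob"))    # False
```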

    The MAR databases: development and implementation of databases specific for marine metagenomics

    Get PDF
    We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database of completely sequenced marine prokaryotic genomes, serving as a marine prokaryote reference genome database, MarDB includes all incompletely sequenced prokaryotic genomes regardless of their level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built from 106 metadata fields, including attributes for sampling, sequencing, assembly and annotation in addition to organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/
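    Since the sequence databases are downloadable, a local BLAST search against them is one way to use the resource programmatically. The sketch below assumes NCBI BLAST+ is installed and uses hypothetical file names; it is not part of the MMP itself, which also offers BLAST through its web interface.

```python
# Sketch: local BLAST search against a downloaded MAR sequence database
# using NCBI BLAST+. File names are hypothetical placeholders.
import subprocess

# Build a protein BLAST database from a downloaded FASTA file.
subprocess.run(
    ["makeblastdb", "-in", "marref_proteins.fasta", "-dbtype", "prot",
     "-out", "marref_db"],
    check=True,
)

# Search a query protein against it, writing tabular output.
subprocess.run(
    ["blastp", "-query", "query.fasta", "-db", "marref_db",
     "-outfmt", "6", "-out", "hits.tsv"],
    check=True,
)
```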

    Mr. Clean: A Tool for Tracking and Comparing the Lineage of Scientific Visualization Code

    No full text
    Proceedings of the Second IEEE Working Conference on Software Visualization (VISSOFT 2014), Victoria, British Columbia, Canada, 29-30 September 2014, Session 3, pp. 75-78

    Norwegian e-Infrastructure for Life Sciences (NeLS)

    Get PDF
    The Norwegian e-Infrastructure for Life Sciences (NeLS) has been developed by ELIXIR Norway to provide its users with a system enabling data storage, sharing, and analysis in a project-oriented fashion. The system is available through easy-to-use web interfaces, including the Galaxy workbench for data analysis and workflow execution. Users confident with a command-line interface and programming may also access it through Secure Shell (SSH) and application programming interfaces (APIs). NeLS has been in production since 2015, with training and support provided by the help desk of ELIXIR Norway. Through collaboration with NorSeq, the national consortium for high-throughput sequencing, an integrated service is offered so that sequencing data generated in a research project is provided to the involved researchers through NeLS. Sensitive data, such as individual genomic sequencing data, are handled using the TSD (Services for Sensitive Data) platform provided by Sigma2 and the University of Oslo. NeLS integrates national e-infrastructure storage and computing resources, and is also integrated with the SEEK platform in order to store large data files produced by experiments described in SEEK. In this article, we outline the architecture of NeLS and discuss possible directions for further development. Copyright: © 2018 Tekle KM et al. This is an open access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
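    As a generic illustration of the programmatic (SSH/API) access mentioned above, the sketch below uploads a file over SFTP with paramiko. The hostname, account, and paths are invented placeholders rather than actual NeLS endpoints.

```python
# Generic sketch of uploading data over SSH/SFTP, the kind of
# programmatic access the abstract mentions. Hostname, user and paths
# are hypothetical placeholders, not actual NeLS endpoints.
import paramiko

HOST = "nels.example.org"   # hypothetical
USER = "researcher"         # hypothetical

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER)   # assumes key-based authentication

sftp = client.open_sftp()
sftp.put("reads.fastq.gz", "/projects/demo/reads.fastq.gz")  # local -> remote
sftp.close()
client.close()
```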